Background. Commonly manufactured depth sensors generate depth images encoding the spatial information that humans normally obtain from their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into melody in real time. Distance from the sensor was translated into sound intensity, stereo-modulated laterally, and pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt how to use the system both on new paths and on those they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is inefficient at too short a range.
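The depth-to-sound mapping described in Methods (distance to intensity, horizontal position to stereo pan, vertical position to pitch) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frequency range, maximum distance, and the linear/logarithmic scalings are all assumptions chosen for the example.

```python
import numpy as np

def depth_to_sound_params(depth, f_min=220.0, f_max=880.0, d_max=4.0):
    """Map a 2D depth image (metres) to per-pixel sound parameters.

    - intensity: closer surfaces sound louder (fades to 0 at d_max),
    - pan: column position sets stereo balance (0.0 = left, 1.0 = right),
    - pitch: row position sets frequency (top rows -> higher pitch).

    f_min, f_max, and d_max are illustrative values, not taken from the paper.
    """
    rows, cols = depth.shape
    # Intensity: 1.0 at zero distance, 0.0 at d_max and beyond.
    intensity = np.clip(1.0 - depth / d_max, 0.0, 1.0)
    # Stereo pan: linear sweep across columns, same for every row.
    pan = np.tile(np.linspace(0.0, 1.0, cols), (rows, 1))
    # Pitch: log-spaced frequencies, highest at the top row.
    pitch = np.tile(np.geomspace(f_max, f_min, rows)[:, np.newaxis], (1, cols))
    return intensity, pan, pitch

# Example: a 4x3 depth image with a near obstacle in the top-left corner.
depth = np.full((4, 3), 3.0)
depth[0, 0] = 0.5
intensity, pan, pitch = depth_to_sound_params(depth)
```

In a real-time device, these per-pixel parameters would then drive an audio synthesizer frame by frame; here they only demonstrate the geometric mapping itself.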